330 research outputs found

    Orienting auditory attention in time: Lateralized alpha power reflects spatio-temporal filtering

    The deployment of neural alpha (8–12 Hz) lateralization in the service of spatial attention is well established: alpha power increases in the cortical hemisphere ipsilateral to the attended hemifield and decreases in the contralateral hemisphere. Much less is known about humans’ ability to deploy such alpha lateralization in time, and thus to exploit alpha power as a spatio-temporal filter. Here we show that spatially lateralized alpha power signifies – beyond the direction of spatial attention – the distribution of attention in time, and thereby qualifies as a spatio-temporal attentional filter. Participants (N = 20) selectively listened to spoken numbers presented on one side (left vs right), while competing numbers were presented on the other side. Key to our hypothesis, temporal foreknowledge was manipulated via a visual cue, which was either instructive, indicating the to-be-probed number position (70% valid), or neutral. Temporal foreknowledge did guide participants’ attention: they recognized numbers from the to-be-attended side more accurately following valid cues. In the magnetoencephalogram (MEG), spatial attention to the left versus the right side induced lateralization of alpha power in all temporal cueing conditions. Modulation of alpha lateralization at the 0.8-Hz presentation rate of spoken numbers was stronger following instructive compared with neutral temporal cues. Critically, we found stronger modulation of lateralized alpha power specifically at the onsets of temporally cued numbers. These results suggest that the precisely timed hemispheric lateralization of alpha power qualifies as a spatio-temporal attentional filter mechanism susceptible to top-down behavioural goals.
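    The hemispheric contrast at the core of this analysis is commonly summarized as a normalized lateralization index. A minimal sketch, assuming simple band-power values (the function name and example values are illustrative, not taken from the study):

```python
import numpy as np

def alpha_lateralization_index(power_ipsi, power_contra):
    """Normalized alpha lateralization index (ALI).

    Positive values indicate relatively more alpha power over the
    hemisphere ipsilateral to the attended side, as expected under
    spatial attention.
    """
    power_ipsi = np.asarray(power_ipsi, dtype=float)
    power_contra = np.asarray(power_contra, dtype=float)
    return (power_ipsi - power_contra) / (power_ipsi + power_contra)

# Illustrative values: attending left should raise alpha power over the
# left (ipsilateral) relative to the right (contralateral) hemisphere.
ali = alpha_lateralization_index(power_ipsi=1.5, power_contra=0.5)
print(ali)  # 0.5
```

    Tracking this index over time (rather than averaging it) is what allows lateralized alpha power to be read as a spatio-temporal, not merely spatial, filter.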

    Auditory skills and brain morphology predict individual differences in adaptation to degraded speech

    Noise-vocoded speech is a spectrally highly degraded signal, but it preserves the temporal envelope of speech. Listeners vary considerably in their ability to adapt to this degraded speech signal. Here, we hypothesized that individual differences in adaptation to vocoded speech should be predictable from non-speech auditory, cognitive, and neuroanatomical factors. We tested eighteen normal-hearing participants in a short-term vocoded-speech-learning paradigm (listening to 100 4-band-vocoded sentences). Non-speech auditory skills were assessed using amplitude modulation (AM) rate discrimination, with modulation rates centered on the speech-relevant rate of 4 Hz. Working memory capacities were evaluated (digit span and nonword repetition), and structural MRI scans were examined for anatomical predictors of vocoded speech learning using voxel-based morphometry. Listeners who learned more quickly to understand degraded speech also showed lower thresholds in the AM discrimination task. This ability to adjust to degraded speech was furthermore reflected anatomically in increased volume in an area of the left thalamus (pulvinar) that is strongly connected to the auditory and prefrontal cortices. Thus, individual non-speech auditory skills and left thalamus grey matter volume can predict how quickly a listener adapts to degraded speech.
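    The general noise-vocoding technique referred to above can be sketched in a few lines: split the signal into bands, extract each band's temporal envelope, and use it to modulate band-limited noise. This is a generic sketch of the method, not the study's exact vocoder; band edges and filter settings are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(signal, fs, n_bands=4, f_lo=100.0, f_hi=7000.0):
    """Simplified n-band noise vocoder.

    Discards spectral fine structure while preserving each band's
    temporal envelope -- the property highlighted in the abstract.
    """
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros(len(signal), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(4, [lo, hi], btype="band", fs=fs)
        band = filtfilt(b, a, signal)
        envelope = np.abs(hilbert(band))               # temporal envelope
        carrier = filtfilt(b, a, rng.standard_normal(len(signal)))
        out += envelope * carrier                      # envelope-modulated noise
    return out

# A toy "speech-like" signal: a 300-Hz tone with a 4-Hz envelope.
fs = 16000
t = np.arange(fs) / fs
speechlike = np.sin(2 * np.pi * 300 * t) * (1 + np.sin(2 * np.pi * 4 * t))
vocoded = noise_vocode(speechlike, fs)
```

    With only 4 bands, intelligibility of real speech under this manipulation depends almost entirely on the listener's use of the preserved envelopes, which is why AM-rate discrimination around 4 Hz is a plausible non-speech predictor of adaptation.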

    Age-related differences in the neural network interactions underlying the predictability gain

    Speech comprehension is often challenged by increased background noise but can be facilitated by the semantic context of a sentence. This predictability gain relies on an interplay of language-specific semantic and domain-general brain regions. However, age-related differences in the interactions within and between semantic and domain-general networks remain poorly understood. Using functional neuroimaging, we investigated commonalities and differences in the network interactions enabling processing of degraded speech in healthy young and older participants. Participants performed a sentence repetition task while listening to sentences with endings of high or low predictability and varying intelligibility. Stimulus intelligibility was adjusted to individual hearing abilities. Older adults showed an undiminished behavioural predictability gain. Likewise, both groups recruited a similar set of semantic and cingulo-opercular brain regions. However, we observed age-related differences in effective connectivity for highly predictable speech of increasing intelligibility. Young adults exhibited stronger connectivity between regions of the cingulo-opercular network and between the left insula and the posterior middle temporal gyrus. Moreover, these interactions were excitatory in young adults but inhibitory in older adults. Finally, the degree of the inhibitory influence between cingulo-opercular regions predicted the behavioural sensitivity to changes in intelligibility for highly predictable sentences in older adults only. Our results demonstrate that the predictability gain is relatively preserved in older adults when stimulus intelligibility is individually adjusted. While young and older participants recruit similar brain regions, differences manifest in the underlying network interactions. Together, these results suggest that ageing affects the network configuration rather than regional activity during successful speech comprehension under challenging listening conditions.

    Predicting speech from a cortical hierarchy of event-based timescales

    How do predictions in the brain incorporate the temporal unfolding of context in our natural environment? Here we provide evidence for a neural coding scheme that sparsely updates contextual representations at the boundaries of events. This yields a hierarchical, multilayered organization of predictive language comprehension. Training artificial neural networks to predict the next word in a story at five stacked timescales and then using model-based functional magnetic resonance imaging, we observed an event-based “surprisal hierarchy” evolving along a temporoparietal pathway. Along this hierarchy, surprisal at any given timescale gated bottom-up and top-down connectivity to neighboring timescales. In contrast, surprisal derived from continuously updated context influenced temporoparietal activity only at short timescales. Representing context in the form of increasingly coarse events constitutes a network architecture for making predictions that is both computationally efficient and contextually diverse.
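    The quantity driving this analysis, surprisal, is the negative log probability of a word given its context. A toy stand-in for the stacked neural language models of the study, using a one-word context with add-alpha smoothing, just to make the measure concrete (corpus and smoothing choice are illustrative assumptions):

```python
import math
from collections import Counter

def bigram_surprisal(corpus_tokens, context_word, next_word, alpha=1.0):
    """Surprisal in bits: S = -log2 p(next_word | context_word),
    estimated from a token list with add-alpha smoothing."""
    vocab = set(corpus_tokens)
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    context_count = Counter(corpus_tokens[:-1])
    p = (bigrams[(context_word, next_word)] + alpha) / (
        context_count[context_word] + alpha * len(vocab)
    )
    return -math.log2(p)

tokens = "the dog chased the cat and the dog barked".split()
# "dog" follows "the" twice in this corpus, "barked" never does,
# so "dog" carries less surprisal after "the" than "barked".
low = bigram_surprisal(tokens, "the", "dog")
high = bigram_surprisal(tokens, "the", "barked")
```

    In the study, analogous surprisal values were computed at five timescales, so that a word could be unsurprising given the local sentence yet surprising given the longer narrative, and vice versa.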

    Distributed networks for auditory memory differentially contribute to recall precision

    Re-directing attention to objects in working memory can enhance their representational fidelity. However, how this attentional enhancement of memory representations is implemented across distinct sensory and cognitive-control brain networks remains unspecified. The present fMRI experiment leverages psychophysical modelling and multivariate auditory-pattern decoding as behavioral and neural proxies of mnemonic fidelity. Listeners performed an auditory syllable pitch-discrimination task and received retro-cues to selectively attend to a to-be-probed syllable in memory. Accompanied by increased neural activation in fronto-parietal and cingulo-opercular networks, valid retro-cues yielded faster and more perceptually sensitive responses in recalling acoustic detail of memorized syllables. Information about the cued auditory object was decodable from hemodynamic response patterns in the superior temporal sulcus (STS), fronto-parietal, and sensorimotor regions. However, among these regions retaining auditory memory objects, neural fidelity in the left STS and its enhancement through attention-to-memory best predicted individuals’ gain in auditory memory recall precision. Our results demonstrate how functionally discrete brain regions differentially contribute to the attentional enhancement of memory representations.
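    The multivariate pattern decoding described above can be illustrated with a minimal cross-validated classifier. This is a generic nearest-class-mean sketch on simulated response patterns, not the study's pipeline or its fMRI data:

```python
import numpy as np

def nearest_mean_decode(patterns, labels, n_folds=5, seed=0):
    """Cross-validated decoding accuracy with a nearest-class-mean
    classifier: class "templates" are estimated from training trials,
    and each held-out trial is assigned to the nearest template."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(labels))
    folds = np.array_split(order, n_folds)
    correct = 0
    for test_idx in folds:
        train_mask = np.ones(len(labels), dtype=bool)
        train_mask[test_idx] = False
        classes = np.unique(labels[train_mask])
        means = np.stack([patterns[train_mask & (labels == c)].mean(axis=0)
                          for c in classes])
        for i in test_idx:
            dists = np.linalg.norm(means - patterns[i], axis=1)
            correct += classes[np.argmin(dists)] == labels[i]
    return correct / len(labels)

# Two simulated "auditory objects" with separable multivoxel patterns.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 20)),
               rng.normal(1.5, 1.0, (50, 20))])
y = np.repeat([0, 1], 50)
acc = nearest_mean_decode(X, y)
```

    Decoding accuracy above the chance level (0.5 here) indicates that the response patterns carry information about which object was cued; per-region accuracies of this kind underlie the comparison across STS, fronto-parietal, and sensorimotor areas.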

    Neural signatures of task-related fluctuations in auditory attention change with age

    Listening in everyday life requires attention to be deployed dynamically – when listening is expected to be difficult and when relevant information is expected to occur – to conserve mental resources. Conserving mental resources may be particularly important for older adults, who often experience difficulties understanding speech. We use electro- and magnetoencephalography to investigate the neural and behavioral mechanisms of dynamic attention regulation during listening and the effects that aging may have on these. We show that neural alpha oscillatory activity indicates when in time attention is deployed (Experiment 1) and that this deployment depends on listening difficulty (Experiment 2). Older adults also show successful attention regulation, although they appear to utilize timing information differently than younger adults. We further show that the recruited brain regions differ between age groups: superior parietal cortex is involved in attention regulation in younger adults, whereas posterior temporal cortex is more involved in older adults (Experiment 3). This difference in the sources of alpha activity across age groups was observed only when a task was performed, and not for alpha activity during resting-state recordings (Experiment S1). In sum, our study suggests that older adults employ different neural control strategies than younger adults to regulate attention in time under listening challenges.

    Modality-specific tracking of attention and sensory statistics in the human electrophysiological spectral exponent

    A hallmark of electrophysiological brain activity is its 1/f-like spectrum: power decreases with increasing frequency. The steepness of this ‘roll-off’ is approximated by the spectral exponent, which in invasively recorded neural populations reflects the balance of excitatory to inhibitory neural activity (E:I balance). Here, we first establish that the spectral exponent of non-invasive electroencephalography (EEG) recordings is highly sensitive to general (i.e., anaesthesia-driven) changes in E:I balance. Building on the EEG spectral exponent as a viable marker of E:I balance, we then demonstrate its sensitivity to the focus of selective attention in an EEG experiment in which participants detected targets in simultaneous audio-visual noise. In addition to these endogenous changes in E:I balance, EEG spectral exponents over auditory and visual sensory cortices also tracked the spectral exponents of the auditory and visual stimuli, respectively. Individuals’ degree of this selective stimulus–brain coupling in spectral exponents predicted behavioural performance. Our results highlight the rich information contained in 1/f-like neural activity, providing a window into diverse neural processes previously thought to be inaccessible in non-invasive human recordings.
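    The spectral exponent is, in its simplest form, the slope of the power spectrum in log-log coordinates. A minimal sketch of that estimate, assuming a precomputed power spectral density (a simplified stand-in for dedicated spectral-parameterization tools; frequency range and data are illustrative):

```python
import numpy as np

def spectral_exponent(freqs, psd, fmin=3.0, fmax=40.0):
    """Estimate the 1/f spectral exponent as the slope of a linear fit
    to the power spectrum in log-log space, restricted to [fmin, fmax] Hz."""
    mask = (freqs >= fmin) & (freqs <= fmax)
    slope, _intercept = np.polyfit(np.log10(freqs[mask]),
                                   np.log10(psd[mask]), deg=1)
    return slope

# A synthetic 1/f^2 spectrum should yield an exponent near -2.
freqs = np.linspace(1.0, 100.0, 500)
psd = freqs ** -2.0
print(round(spectral_exponent(freqs, psd), 2))  # -2.0
```

    On this convention, a flatter spectrum (exponent closer to 0) is read as a shift toward excitation, and a steeper one toward inhibition, which is what links the exponent to E:I balance in the abstract.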